Patent abstract:
PRIORITIZATION AND ASSIGNMENT MANAGER FOR AN INTEGRATED TEST PLATFORM The present invention relates to a method of prioritizing and assigning test scripts that is provided on an integrated testing platform, where the testing platform is configured to organize, manage and promote the debugging of test scripts prepared by a tester. Test scripts are used in testing software modules. The method includes receiving a plurality of test scripts, applying a predetermined set of factors to each test script, and assigning a weighting value to each factor based on the relative importance of the factor. A priority value is established for each test script based on the weighted factors corresponding to the test script, and the test script is assigned to a queue position for execution based on the corresponding priority value, where the assigned test script is associated with one or more bias factors. The test script is then selected from the test queue and sent to a tester if the bias factors indicate that the test script's requirements match the tester's corresponding bias factors (...).
Publication number: BR102012008540B1
Application number: R102012008540-2
Filing date: 2012-04-11
Publication date: 2020-11-17
Inventors: Julian M. Brown;Peter J. Smith;Stephen M. Williams;Jason A. Steele
Applicant: Accenture Global Services Limited;
IPC primary class:
Patent description:

Priority Claim
[001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 61/474,516, filed on April 12, 2011, which is incorporated herein by reference in its entirety.
Background of the Invention
1. Technical Field.
[002] This description concerns software testing and, in particular, this disclosure concerns an integrated platform to develop, debug and run tests to ensure the integrity and functionality of software systems.
2. Background.
[003] The development of computer software involves a rigorous testing process to ensure that the software works as intended. During the testing process, testers write several test scripts to perform the different types of tests needed to ensure that the computer software is functioning as designed. Testers also configure and run test scripts while tracking results, and report the test result to appropriate personnel. This process is inefficient and time consuming, and requires significant tester involvement.
[004] Additionally, as businesses continue to rely on complex computer software and software packages, increasingly complex computer software has been developed to satisfy business demands. Because of the increased complexity and scale, such software requires a large-scale testing process involving many more testers and test scripts than were previously required. Such increases are related to organizations centralizing their testing and switching to a third-party testing model. Traditionally, testing was 'embedded' in the systems development lifecycle (SDLC) for each project, but now central 'distinct' testing functions exist within organizations, which test across multiple projects and releases.
[005] Testing tools have been developed to help testers perform the various steps of the testing process. However, existing testing tools are not able to provide the functionality and efficiency required to overcome the challenges posed by the large-scale testing process.
[006] Testing of various products and/or software packages has increased in complexity and scope. In the past, relatively small groups of designers and developers, perhaps 10 to 30 in number, developed various tests to test and verify the function of software modules or code segments. Such small groups of individuals were manageable. However, as the number of individuals contributing to a project becomes large, redundancy and complexity increase, which contributes to increased cost and an increased number of errors. Therefore, there is a need to address the problems noted earlier.
Summary
[007] The next generation test system (NGT) provides a managed services platform for centralized development, debugging and implementation of software testing, where hundreds to perhaps thousands of individuals can collaborate in the development and implementation of a very large set of test modules or scripts that form a set of test programs. The next generation test system is not limited to testing only software modules, and can be used for hardware testing as well, as long as signals and indicators of test results that reflect the state of the hardware are provided to the test system.
[008] For example, the next generation test system can be used by an organization or software development house to test and verify the function and operation of a large application or software package, or set of applications, such as an accounting system, a billing system, an operating system version release, or any other system. The next generation test system can be used in a testing "factory" where many hundreds of individuals perform final tests or quality control tests on the same or similar products, for example, a PC operating system test before release.
[009] The next generation test system can be used to develop and debug the tests, and can also be used to implement the final test procedures to verify the final release or quality control of an actual product undergoing tests prior to shipment. The next generation test system can be used to a) plan and develop the testing of a product for release, b) plan and estimate the effort or manpower required to develop the test process, c) manage the preparation process, d) manage the distribution of test scripts to test personnel, and e) automate the testing process.
[0010] A method of prioritizing and assigning test scripts is provided on an integrated testing platform, where the testing platform is configured to organize, manage and promote the debugging of test scripts prepared by a tester. The method includes receiving a plurality of test scripts, applying a predetermined set of factors to each test script, and assigning a weighting value to each factor based on the relative importance of the factor. A priority value is established for each test script based on the weighted factors corresponding to the test script, and the test script is assigned to a queue position for execution based on the corresponding priority value, where the assigned test script is associated with one or more bias factors. A selected test script is then identified in the test queue and sent to a tester if the bias factors indicate that the test script's requirements match the tester's corresponding bias factors. The test script can be assigned in real time when the tester clicks a 'get next' icon.
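The weighted prioritization described above can be illustrated with a short sketch. This is not code from the patent; the factor names, weight values and data layout are all assumptions chosen for the example.

```python
# Illustrative sketch of weighted test-script prioritization: each factor is
# multiplied by its weighting and the results are summed into a priority
# value; queue position follows descending priority. All names are assumed.

def priority_value(factors, weights):
    """Combine each factor value with its weighting into a single priority."""
    return sum(factors[name] * weights.get(name, 0.0) for name in factors)

# Hypothetical per-project weightings (relative importance of each factor).
weights = {"failure_impact": 0.9, "business_priority": 0.7, "lead_time": 0.4}

scripts = [
    {"id": "TS-1", "factors": {"failure_impact": 3, "business_priority": 2, "lead_time": 1}},
    {"id": "TS-2", "factors": {"failure_impact": 1, "business_priority": 3, "lead_time": 2}},
]

# The test queue: highest priority value first.
queue = sorted(scripts, key=lambda s: priority_value(s["factors"], weights),
               reverse=True)
print([s["id"] for s in queue])  # → ['TS-1', 'TS-2']
```

TS-1 scores 3×0.9 + 2×0.7 + 1×0.4 = 4.5 against 3.8 for TS-2, so it takes the head of the queue.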
[0011] Other embodiments of systems, methods, features and their corresponding advantages will be or will become apparent to those skilled in the art upon examination of the following figures and detailed description. All such additional systems, methods, features and advantages are considered to be included in this description, are within the scope of the invention and are protected by the following claims.
Brief Description of the Drawings
[0012] The system can be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale, emphasis instead being placed on illustrating the principles of the invention. In addition, in the figures, like reference numbers designate corresponding parts throughout the different views.
[0013] Figure 1 is a high level block diagram showing a particular embodiment of the primary components of a next generation test system;
[0014] Figure 2 is a pictorial diagram showing additional features of the next generation test system;
[0015] Figure 3 is a pictorial diagram showing data flow and operational processes of the next generation test system;
[0016] Figure 4 is a pictorial diagram showing prioritization of test cases in a particular embodiment of a prioritization and assignment manager;
[0017] Figure 5 is a screen capture of a particular embodiment of a prioritization and assignment manager for a next generation test system;
[0018] Figure 6 is a high level block diagram showing the environment in which the next generation test system operates;
[0019] Figure 7 is a high level block diagram of a computer system;
[0020] Figure 8 is a pictorial diagram of an embodiment of the NGT system;
[0021] Figure 9 is a pictorial diagram of an embodiment of the NGT system;
[0022] Figure 10 is a high level hardware block diagram of another embodiment of the NGT system.
Detailed Description of Preferred Embodiments
[0023] Figure 1 is a high level block diagram showing eight components of a next generation test system 100, which includes a test planning tool 110, a modular script designer 120, a prioritization and assignment manager (PAM) 130, a test execution toolbar 140, an automation controller 150, a test data supply chain controller 160, a reporting portal 170 and a defect management tool 180. The next generation test system 100 software can be a set of tool programs that are integrated with existing or underlying basic test tools. Thus, the next generation test system 100 does not necessarily replace existing management and development tools, but instead expands and extends the capacity of such existing tools. The next generation test system 100 acts as a layer on top of existing management and development tools.
[0024] Figure 2 is a diagram of a global test process using the next generation test system 100. The test process can include a test planning stage 202, a test preparation stage 204 and a test execution stage 206. Transitioning from the test planning stage 202 to the test preparation stage 204, and from the test preparation stage 204 to the test execution stage 206, may involve work assignment 208. The test planning stage 202 can include scope 210, estimates 212 and resources 214. The test preparation stage 204 can include designing new scripts at 222, optimizing regression packages at 224, preparing test data at 226 and developing automated tests at 228. The test execution stage 206 may include allocating test data at 232, running manual tests at 234, running automated tests at 236, and defect management at 238. The next generation test system 100 may also include the reporting capability 240 through all stages of the testing process. The next generation test system 100 can provide increased efficiency and functionality through all stages of testing.
[0025] Returning to figure 1, the test planning tool 110 estimates and plans the preparation, work and labor requirements involved in starting a particular software release. The test planning tool 110 provides an indication of the plurality of skill sets required to test the various test scripts, and the different skill sets associated with the available test personnel. The test planning tool 110 also provides assisted estimation. The test planning tool can use a three-stage process to provide estimates at increasing levels of accuracy. Information from previous releases is used to improve estimates. A pluggable architecture for customer-specific calculations can be used. The test planning tool 110 also provides deconstruction of testing requirements.
[0026] The test planning tool 110 helps the user to break down requirements into a required number of tests. Collaborative work capabilities allow for a 'divide and conquer' approach. The test planning tool 110 additionally provides resource forecasting by skill. Advance forecasting of the skills required to support testing activities becomes possible, and a graphical display of availability versus demand can be presented. The test planning tool 110 additionally helps to model the test organization by promoting cross-skilling. The test planning tool 110 also provides regression package suggestions. Using a metadata driven approach, the system suggests an appropriate regression package. Risk-based test scores can be used to scale the package accordingly. The test planning tool 110 essentially quantifies which items need to be tested, which skill sets are required to perform the tests, and whether the required skill sets are present in the resources provided.
[0027] The modular script designer 120 is used to design new tests or test scripts in a modular form, and increases the efficiency of the test effort and the organization by maximizing the benefit of test scripts that have been recorded by other designers, technicians or testers. This avoids redundancy by reusing test scripts that others have created, and that have been functionally verified.
[0028] The modular script designer 120 allows reuse of modules instead of complete scripts, since a test script is composed of several test modules, where each module represents a logical part of a test, for example, logging into the system for an application.
[0029] Each test script created by a tester or test designer using the modular script designer 120 includes associated data corresponding to the test script's approval history and the test script's functional location or hierarchy with respect to other test scripts that are run before and after the test script in question. The data associated with a test script also includes a description of the test script's function and a description identifying the products for which the test script is used.
[0030] Once a test script has been designed using the modular script designer 120, it is saved and eventually transferred to the standard test tool, which is a commercially available, separate and independent test device or system used by the next generation test system 100. As mentioned earlier, the next generation test system 100 does not replace the basic or low-level test tool. For example, the basic test tool can be a Hewlett Packard HP Quality Center™ test tool, IBM Rational Quality Manager, or another commercially available basic test tool, which can run under the control and direction of the next generation test system 100. The next generation test system 100 is integrated with all the various basic test tools and allows communication to and from the various basic test tools.
[0031] Figure 3 is a logical diagram of an embodiment of the prioritization and assignment manager 130. Users of the prioritization and assignment manager 130 can include a test lead 302 and testers 304 and 306. The prioritization and assignment manager 130 receives the skills configuration information 310 and the PAM configuration information 312 from the test lead 302. The skills configuration can include information such as the skills of the test lead, or the capabilities and experience of testers 304 and 306. The PAM configuration can include information such as the weighting, designated by the test lead 302, of the importance of factors for each script. The test lead 302 can set the weighting level using a prioritization and assignment manager 130 user interface. The prioritization and assignment manager 130 uses the skills and PAM configurations 310 and 312 to form a PAM queue 314 and to determine which script 320 or 322 is distributed to which tester 304 or 306.
[0032] The PAM queue 314 can be a list of the scripts in priority order for execution. The most urgent scripts to be executed are at the top of the list. The prioritization and assignment manager 130 determines the order of the scripts within the PAM queue 314 based on the PAM configuration 312. The PAM queue 314 receives, from the test tool 330, scripts that are scheduled for release. The test tool 330 can be a commercially available test tool such as the HP Quality Center™.
[0033] When testers invoke the 'get next' capability 340 through a user interface, the prioritization and assignment manager 130 distributes the appropriate scripts 320 and 322 from the PAM queue 314 to the appropriate testers 304 and 306 for execution, respectively. The prioritization and assignment manager 130 determines which script 320 or 322 to distribute to which tester based on the skills, or experience and background, of testers 304 and 306. Then, the prioritization and assignment manager 130 displays script 320 or 322 to the designated tester 304 or 306 through the modular script designer 120. After the scripts are distributed, the prioritization and assignment manager 130 displays to the testers 304 and 306 the details of the assigned scripts 320 and 322.
[0034] After reviewing a script 320 or 322, the prioritization and assignment manager 130 allows testers 304, 306 to select an action from a plurality of actions. The plurality of actions can include, for example, accepting the script at 350, deferring the script at 351, escalating the script at 352, rejecting the script at 353, marking the script as out of scope at 354 or blocking the script at 355. The prioritization and assignment manager 130 can display the plurality of actions to testers 304, 306 for selection from a drop-down menu.
[0035] For example, if the prioritization and assignment manager 130 assigns a script 320 to a tester 304, and tester 304 accepts the script at 350, the desktop toolbar loads the script for execution at 360 and the prioritization and assignment manager 130 changes the status of script 320 to designate the identification of tester 304. If tester 304 defers the script at 351, the prioritization and assignment manager 130 directs tester 304 to enter information regarding the deferral, including a date and time to which the script should be deferred and a reason for the deferral at 361. If tester 304 escalates the script at 352 or rejects the script at 353, the prioritization and assignment manager 130 guides tester 304 to enter information about the escalation or rejection, including a reason at 362.
[0036] If tester 304 marks the script as out of scope at 354, the prioritization and assignment manager 130 guides the user to choose whether to create or link to a defect at 363, which may be a new defect or an existing defect. If the tester chooses to link to a defect, PAM allows the tester to choose a defect, or defects, to link to the script at 364. If tester 304 blocks the script at 355, the prioritization and assignment manager 130 directs tester 304 to link the script to a new or existing defect at 364, and allows the test lead 302 to unassign script 320 and override its priority at 365 in order to increase script 320's priority and reassign script 320.
[0037] After tester 304 enters details or information such as a date/time to defer a script at 361, a reason to escalate or reject a script at 362, or a defect link after marking a script as out of scope or blocking a script at 365, the prioritization and assignment manager 130 lets tester 304 save the details or information for the script at 366 and sends the script back to the PAM queue 314. Alternatively, tester 304 can choose not to save the details by canceling and closing the script view at 367.
[0038] The prioritization and assignment manager 130 is an important element of the next generation test system 100. The prioritization and assignment manager 130 tracks all tests or test scripts in the set of test programs as part of a list in a database and assigns a priority to each of the individual test scripts based on a given set of prioritization factors and assignment factors.
[0039] Prioritization factors can be script attributes, including, for example, failure impact, failure probability, lead time, business priority, estimated effort and test end date. The prioritization and assignment manager 130 can use prioritization factors to assign a numeric rating to a script for stack ranking, for example, to evaluate a priority for executing the script.
[0040] Assignment factors can be user attributes evaluated to weight a user against a set of scripts that are available for testing, and can be a numerical value assigned to a script for an individual user. Assignment factors may include, for example, required skills, the skills of a tester, the status of a script, script workflow, tester workflow, script author, a user's previous experience with a script or its predecessor, and information regarding the tester to whom the script is assigned. The prioritization and assignment manager 130 can use assignment factors to assign a numeric value to a script for an individual user. The priority of a particular test script determines its position in the test queue. The prioritization and assignment manager 130 can use the prioritization and assignment factors together for matching, and assign a script to a user at the time of a request.
[0041] The prioritization and assignment manager 130 provides centralized automated prioritization of test scripts with real-time assignment logic. All test scripts are prioritized based on a centralized set of factors, which can be centrally configured to influence the total test operation (for example, to improve performance against KPIs (Key Performance Indicators)). The prioritization and assignment manager 130 additionally provides skills-based assignment, and provides a pull approach, rather than a push approach. Testers can click a 'get next' icon on their desktop screen to be assigned the next script to run. The next script is chosen in real time based on weighted assignment factors.
[0042] Each of the factors used to assign priority to the test script can be weighted. In one example, a developer may be presented with a screen having a plurality of sliders or controls corresponding to each test script. Moving the slider to the right can increase the priority level associated with the corresponding test script, while moving the slider to the left can decrease the priority level associated with the corresponding test script. Thus, the tester can assign a priority level to a test script based on the tester's judgment and skill. The prioritization of the various test scripts can affect the relationship and interaction between all the various test scripts. The prioritization and assignment manager 130 can perform the prioritization function in a batch mode after receiving input from the test script creator.
[0043] Some of the factors associated with the assigned priority of the test scripts may have feedback or decision tree capability so that, for example, if a test is performed and returns a failure indication, the prioritization and assignment manager 130 can identify the other test scripts that may be impacted by the failure.
[0044] The prioritization and assignment manager 130 also designates a skill set for each of the test scripts in the next generation test system 100 to optimize the use of workforce personnel. For example, the various test scripts are assigned to test personnel based on the particular tester's skill set. For example, a tester can click a 'get next' button or icon on a screen to request that a new test script be sent to that tester. The prioritization and assignment manager 130 can access a database containing the skill sets of each tester, and assign the next highest priority test script to that tester based on the tester's skill set and the skill set required by the test script, in order to optimize the productivity of the system and the total testing personnel. Once the tester receives the test script, he or she executes the test script.
[0045] The prioritization and assignment manager 130 can also provide a pluggable framework for new factors. New decision factors can be added by defining a new factor class. The factor can be presented through the user interface and can be weighted in the decision logic. This can be used to enable advanced 'Applied Statistics' decision models.
[0046] The following table shows a list of PAM configuration factors that can be used in a prioritization and assignment manager 130. Each factor can be associated with a rating or weight that can be configured for each project. Ratings can be numbers assigned to elements of a factor, for example, high, medium or low, for factors such as impact of failure (IOF), business priority (BP) and likelihood of failure (LOF). Weightings can be numeric values assigned to the factor itself, and can be a value between 0 and 1 in increments of 0.1. Other implementations may include fewer, more or other considerations and factors.
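The configuration scheme above (high/medium/low rating elements per factor, plus a per-factor weighting between 0 and 1 in increments of 0.1) can be sketched as follows. The numeric mapping, factor names and weight values are hypothetical, not taken from the patent's table.

```python
# Hypothetical PAM project configuration: rating elements map to numbers,
# and each factor carries a weighting in [0, 1] in increments of 0.1.

RATING_SCALE = {"high": 3, "medium": 2, "low": 1}  # assumed numeric mapping

factor_weightings = {
    "impact_of_failure": 0.9,
    "likelihood_of_failure": 0.8,
    "business_priority": 0.6,
}

def validate_weighting(w):
    """A weighting must lie between 0 and 1 in increments of 0.1."""
    return 0.0 <= w <= 1.0 and abs(round(w * 10) - w * 10) < 1e-9

# Every configured weighting should pass validation.
assert all(validate_weighting(w) for w in factor_weightings.values())
```

A configuration screen could reuse `validate_weighting` to reject values such as 0.75 or 1.2 that fall outside the allowed 0.1-step grid.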



[0047] The prioritization and assignment manager 130 can designate a subset of factors that are rated for prioritization and assignment with the help of supporting factors. Ratable factors may include, for example: prioritization factors such as impact of failure, likelihood of failure, business priority and lead time, and assignment factors such as author bias, history bias, home team bias and skill matching.
[0048] Factors can be divided into types or categories, including: enabling factors, which can be factors that help determine whether a script can be included for assignment consideration, for example, whether a script can be included in a prioritization queue or an assignment queue; rating calculation factors, which retain data used directly in rating calculations; data factors, which can be factors that provide additional information about a script or user; factor weightings, which can be project level configuration data used to normalize ratings to reflect business preference; and configurable factors, which can be multiple factors that are configurable by test leads or test managers.
[0049] The following table shows examples of factors, types for


[0050] In one embodiment, the prioritization and assignment manager 130 can include a pre-queue, a priority queue, an assignment queue and a status queue. The pre-queue can be a list of all scripts in the current release. The priority queue can be a list of scripts sorted in descending order by script prioritization ratings. The assignment queue can be a transitory queue created in the memory of the prioritization and assignment manager 130 when a tester requests a script assignment. The assignment queue can include a subset of the priority queue. The status queue can be a state maintenance table, and can be linked to the priority queue and the assignment queue.
[0051] The pre-queue can store all attributes for each script to support generation of a priority queue and an assignment queue and to improve performance of the prioritization and assignment manager 130. The prioritization and assignment manager 130 determines which scripts to store in the pre-queue by script release start date and script release end date. If the pre-queue baseline date falls between the script release start date and the script release end date, then the script is in the current release, and the prioritization and assignment manager 130 stores the script in the pre-queue. The pre-queue table can store data using common data types, which can be source types translated to common pre-queue data types; common status codes, which can be source status codes translated to common PAM status codes; and source-to-destination column mapping, which can be source columns mapped to common pre-queue columns. The prioritization and assignment manager 130 can use the pre-queue to connect to external test systems 638 to extract data from the external test systems 638 with a standard provider to load the data into the pre-queue table.
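The pre-queue inclusion rule above (a script belongs to the current release when the baseline date falls between its release start and end dates) can be sketched as a small date check. Field names and dates are illustrative assumptions.

```python
from datetime import date

# Sketch of the pre-queue inclusion rule: a script is part of the current
# release when the baseline date falls within its release window.

def in_current_release(script, baseline):
    """True if baseline lies between the script's release start and end dates."""
    return script["release_start"] <= baseline <= script["release_end"]

# Hypothetical script record.
script = {"id": "TS-7",
          "release_start": date(2012, 1, 1),
          "release_end": date(2012, 6, 30)}

print(in_current_release(script, date(2012, 4, 11)))  # → True (within window)
print(in_current_release(script, date(2012, 7, 1)))   # → False (after window)
```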
[0052] The prioritization and assignment manager 130 can determine which scripts to store in the priority queue based on the test set start date and test set end date for each script. If the priority queue baseline date is between the test set start date and the test set end date for a script, then the prioritization and assignment manager 130 stores the script in the priority queue. Prioritization ratings can include the failure impact rating, the failure probability rating, the business priority rating, and the lead time rating.
[0053] The impact of failure (IOF) rating can be calculated as shown in Equation 1 below. IOF rating = IOF factor × IOF weighting (Equation 1)
[0054] The likelihood of failure (LOF) rating can be calculated as shown in Equation 2 below. LOF rating = LOF factor × LOF weighting (Equation 2)
[0055] The business priority (BP) rating can be calculated as shown in Equation 3 below. BP rating = BP factor × BP weighting (Equation 3)
[0056] The lead time (LT) rating can be calculated as shown in Equation 4 below. LT rating = ((Estimated Effort + LT) days + ((test set start date) − (Estimated Effort + LT)) days) × LT weighting (Equation 4)
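Equations 1–3 and the NGT priority rating of paragraph [0057] (the sum of all prioritization ratings) can be sketched as follows. The factor values and weightings are illustrative, and the lead time rating, whose exact formula in the source text is unclear, is simplified here to factor × weighting like the others.

```python
# Sketch of the prioritization ratings: each rating is factor x weighting
# (Equations 1-3; the LT rating is simplified to the same form here), and
# the NGT priority rating is the sum of all prioritization ratings.

def rating(factor, weighting):
    """One prioritization rating, e.g. IOF rating = IOF factor x IOF weighting."""
    return factor * weighting

# Hypothetical factor values and per-project weightings for one script.
script_factors = {"IOF": 3, "LOF": 2, "BP": 3, "LT": 1}
weightings = {"IOF": 0.9, "LOF": 0.8, "BP": 0.6, "LT": 0.4}

ngt_priority = sum(rating(script_factors[k], weightings[k]) for k in script_factors)
print(round(ngt_priority, 2))  # → 6.5
```

Here 3×0.9 + 2×0.8 + 3×0.6 + 1×0.4 = 6.5; scripts in the priority queue would then be ranked by this sum.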
[0057] The NGT priority rating can be equal to the sum of all prioritization ratings. Scripts in the priority queue can be ranked by the NGT priority rating of each script. The prioritization and assignment manager 130 can use the priority queue to assign scripts to a tester on demand, for example, when a tester clicks a 'get next' button or icon to request the next test script.
[0058] When a tester requests a next test script, the prioritization and assignment manager 130 can generate the assignment queue based on the following enabling factors: required skills, NGT status, excluded blocked script and schedule. The prioritization and assignment manager 130 may include, in the assignment queue, scripts whose required skills are part of the user's active skills. The prioritization and assignment manager 130 can also include, in the assignment queue, scripts that have an NGT status of rejected, escalated, deferred, assigned or pre-assigned. The prioritization and assignment manager 130 can additionally include scripts that have an excluded blocked script factor set to "N", indicating that the script is not excluded or blocked.
[0059] The prioritization and assignment manager 130 can exclude from the assignment queue scripts that have an estimated test completion date that conflicts with a user's time off schedule, where the estimated completion date (ECD) can be calculated as shown in Equations 5 and 6 below. ECD = (today's date) + (total number of days to test) (Equation 5) (total number of days to test) = Estimated Effort + LT (Equation 6)
[0060] For example, if a user's time off starts on a leave start date and ends on a leave end date, a script can be added to the assignment queue if the estimated completion date falls before, or is earlier than, the leave start date. But the script is not included in the assignment queue if the script's estimated completion date is later than the leave start date.
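Equations 5–6 and the leave exclusion rule above can be sketched as follows. Function and field names are assumptions; effort and lead time are taken to be in days.

```python
from datetime import date, timedelta

# Sketch of the ECD calculation (Equations 5-6) and the schedule check:
# a script only joins the assignment queue if its estimated completion
# date falls before the tester's leave start date.

def estimated_completion_date(today, estimated_effort, lead_time):
    # Equation 6: total number of days to test = Estimated Effort + LT
    # Equation 5: ECD = today's date + total number of days to test
    return today + timedelta(days=estimated_effort + lead_time)

def assignable(today, estimated_effort, lead_time, leave_start):
    """True when the ECD is earlier than the tester's leave start date."""
    return estimated_completion_date(today, estimated_effort, lead_time) < leave_start

today = date(2012, 4, 11)
print(assignable(today, 3, 2, date(2012, 4, 20)))   # → True (ECD 2012-04-16)
print(assignable(today, 10, 2, date(2012, 4, 20)))  # → False (ECD 2012-04-23)
```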
[0061] The prioritization and assignment manager 130 can determine the order in which scripts are listed in the assignment queue based on assignment ratings, including, for example, the author bias rating, the history bias rating, the home team bias rating and the skill matching rating. The prioritization and assignment manager 130 can determine the author bias rating as follows: if the script author is the end user, then the author bias rating is equal to the author bias weighting; otherwise, the author bias rating is zero. History bias can indicate whether a particular user previously ran a version of the script. The prioritization and assignment manager 130 can determine the history bias rating as follows: if the end user has tested the script or a previous version of the script in the past, then the history bias rating is equal to the history bias weighting; otherwise, the history bias rating is zero. The prioritization and assignment manager 130 can obtain script execution history records from the external test tool, or script execution history records can be stored in a database. The prioritization and assignment manager 130 can determine the skill matching rating as follows: select a subset of scripts from the priority queue based on the test tool instance (for example, a project within an external test tool) and, considering the assignment enabling factors, for each script in the subset the skill matching rating is equal to the count of matched non-mandatory skills divided by the count of mandatory skills, multiplied by the skill matching weighting.
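The skill matching rating just described can be sketched as a small function. The names and the rule that all mandatory skills must be present (an enabling factor stated earlier) are assumptions of this example, not the patent's exact algorithm.

```python
# Sketch of the skill matching rating: count of matched non-mandatory skills
# divided by the count of mandatory skills, times the skill matching
# weighting. Assumes (per the enabling factors) that a script is only
# considered when the tester holds all mandatory skills.

def skill_match_rating(tester_skills, mandatory, non_mandatory, weighting):
    if not mandatory or not set(mandatory) <= tester_skills:
        return 0.0  # assumed: missing mandatory skills disqualify the script
    matched = len(set(non_mandatory) & tester_skills)
    return matched / len(mandatory) * weighting

tester = {"sql", "billing", "automation"}
# One mandatory skill held, one of two non-mandatory skills matched.
print(skill_match_rating(tester, ["sql"], ["billing", "reporting"], 0.5))  # → 0.5
```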
[0062] The prioritization and assignment manager 130 can additionally calculate an NGT assignment rating as shown in Equation 7 below: NGT assignment rating = (Σ assignment ratings x assignment weight) x (NGT priority rating x NGT priority weight) (Equation 7)
[0063] Scripts in the assignment queue can be sorted in descending order by NGT assignment rating. The prioritization and assignment manager 130 can assign scripts to a tester in order of NGT assignment rating, starting with the script with the highest NGT assignment rating.
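Equation 7 and the descending sort can be sketched as follows; the Σ reading of the equation and the dictionary field names are assumptions for illustration:

```python
def ngt_assignment_rating(assignment_ratings, assignment_weight,
                          priority_rating, priority_weight):
    # Equation 7: the summed assignment ratings scaled by the assignment
    # weight, multiplied by the weighted NGT priority rating.
    return (sum(assignment_ratings) * assignment_weight) * (
        priority_rating * priority_weight)

def order_assignment_queue(scripts):
    # Sort scripts in descending order of NGT assignment rating, so the
    # highest-rated script is assigned first.
    return sorted(scripts, key=lambda s: s["rating"], reverse=True)
```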
[0064] The prioritization and assignment manager 130 can use the status queue to decouple user actions associated with a script's case from the script itself during script assignment or priority queue restoration. When a script's status is assigned or rejected, the prioritization and assignment manager 130 will retain the script state even if the priority queue is cleared or recreated. Script state can refer to PAM attributes associated with a script case in the priority queue. The status queue can retain these attributes, so the prioritization and assignment manager can continue to assign scripts while the priority queue is being restored.
[0065] Figure 4 is a pictorial diagram showing prioritization of test cases in a specific embodiment of a prioritization and assignment manager. The prioritization and assignment manager 130 can order tests based on risk, defect status and data status at 402. The prioritization and assignment manager 130 can assign tests based on the professional expertise required to run the test at 404. A risk-based test (RBT) score can be assigned to each test based on failure probability and failure impact at 406. The prioritization and assignment manager 130 can also check the live status of a linked defect, or of another data type, at the time of assignment. The status can be, for example, a Red, Amber or Green (RAG) status based on whether any discovered defects are in the test or linked to a test module at 408. If the test module at 408 has a linked defect or an open issue with a data type, the prioritization and assignment manager 130 does not assign the test module at 408 until the issues are resolved.
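The scoring and gating steps above can be sketched as follows. The patent says only that the RBT score is "based on" failure probability and impact; a simple product is assumed here, and the RAG gating logic is likewise illustrative:

```python
def rbt_score(failure_probability, failure_impact):
    # Hypothetical RBT score: probability of failure times its impact.
    return failure_probability * failure_impact

def rag_status(has_linked_defect, has_data_issue):
    # Red: blocking linked defect; Amber: open data issue; Green: clear.
    if has_linked_defect:
        return "Red"
    if has_data_issue:
        return "Amber"
    return "Green"

def can_assign(status):
    # Only modules with no open issues are assigned.
    return status == "Green"
```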
[0066] The prioritization and assignment manager 130 can prioritize tests according to the following characteristics:
[0067] Priority-Based Ordering: The ability to prioritize tests based on the Risk-Based Test value associated with each test and the associated business priority;
[0068] Defect Impact: Tests that have blocking defects must be moved to the bottom of the queue, and tests with defects linked to one of the modules in the script must be moved down the queue;
[0069] Data Availability: Tests that have no available data should be moved to the bottom of the queue, and tests with limited available data should be moved down the queue;
[0070] Assignment Based on Professional Expertise: Tests can be assigned to testers based on their professional expertise;
[0071] Pull Model: The assignment model can be a 'pull' model; for example, testers can request their 'next test' through the GUI and the system will assign the next test;
[0072] Assignment of Preparation and Execution: The system can assign both test preparation and test execution work to testers;
[0073] Automation Assignment: The ability to assign scripts that are automated to the automation controller instead of sending the script to a manual tester;
[0074] Override Priority: Allow test managers to override the priority of certain scripts or scenarios to ensure execution on a particular date or at a particular time; and
[0075] Prerequisite Management: Verify that prerequisites are met before assigning a script, where prerequisites may include time-related requirements (for example, must be run after hours) and script-dependent requirements (for example, must be executed after a successful execution of script X).
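The prerequisite check described in the last item of the list above can be sketched as follows; the two requirement kinds mirror the examples given, and the function name is illustrative:

```python
from datetime import datetime, time

def prerequisites_met(now, earliest_time, completed_ok, required_scripts):
    # Time-related requirement: e.g. "must be run after hours".
    if now.time() < earliest_time:
        return False
    # Script-dependent requirement: e.g. "run only after a successful
    # execution of script X".
    return all(script in completed_ok for script in required_scripts)
```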
[0076] Figure 5 is a screen capture of a specific embodiment of the prioritization and assignment manager 130. User interface 500 can include a plurality of screens, or tabs, including a Basic PAM Configuration tab 502 and an Additional Classification Factors tab 504, to guide a user through the process of configuring the prioritization and assignment manager 130. The prioritization and assignment manager 130 can guide the user to select a factor for configuration from the PAM Configuration Factor drop-down menu 506. The user interface 500 then displays the details of the selected factor in the priority factor panel 508, the assignment factor panel 510 and the other factors panel 512, and provides sliders, such as the priority factor sliders 514, with which the user establishes the desired weight, which can be a value between 0 and 1 in increments of 0.1. The user can repeat this process for additional factors. The user can then click Calculate Priority 516, and the prioritization and assignment manager 130 updates the PAM queue 518 and displays the list of scripts in order of priority, which the prioritization and assignment manager 130 calculates based on the weights set by the user. Alternatively, the user can click Set Default Values 520 to set all factor weights to their default values. After the user completes the PAM configuration, the user can save the configuration by clicking Save 522, or close the user interface without saving by clicking Cancel 524.
[0077] The Additional Classification Factors tab 504 may allow the tester to enter additional information regarding releases. The Additional Classification Factors tab can have a Release drop-down menu that lists test tool releases. The user can select a release from the drop-down menu, which displays the Release Start Date and Release End Date for the release, and a list of cycles belonging to the release. A Guarantee Period (days) drop-down menu can allow the user to establish a guarantee period for the release, with values from 0-100. The prioritization and assignment manager 130 can then display in the Additional Classification Factors tab 504 the Release Cycle Name, Release Cycle Priority, Release Cycle Activation and Release Cycle Complexity in columns for each cycle in the release. The Release Cycle Name column can include the name of each cycle. The Release Cycle Priority column can include a text box for entering a value from 0 to 100 (the default value is 0), which can indicate the likelihood of automated script assignment for that cycle. The Release Cycle Activation column can include a drop-down menu with values including: Assign scripts until cycle end date (allows assignment of scripts until the cycle end date); Assign scripts beyond the cycle end date (allows assignment of scripts beyond the cycle end date, but within the release guarantee period); and Do not assign scripts in this cycle (prevents assignment of scripts in that cycle). The Release Cycle Complexity column can include a drop-down menu with complexity values including: Very Complex, Complex, Medium, Simple and Very Simple. Complexity values can indicate the level of professional expertise proficiency that is preferred for testing the script. The prioritization and assignment manager 130 can assign a script to a tester whose professional expertise matches the indicated proficiency level.
The user can click a Save key to save the selected Release Cycle values to a database, or a Cancel key to close the tab without saving the selected values.
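The three Release Cycle Activation options described above can be sketched as a small policy check; the policy identifiers are illustrative stand-ins for the drop-down values:

```python
from datetime import date, timedelta

def cycle_allows_assignment(policy, today, cycle_end, guarantee_days):
    # Release Cycle Activation sketch (hypothetical policy names):
    #   "until_end"  - assign scripts only up to the cycle end date
    #   "beyond_end" - assign past the end date, within the guarantee period
    #   "never"      - do not assign scripts in this cycle
    if policy == "until_end":
        return today <= cycle_end
    if policy == "beyond_end":
        return today <= cycle_end + timedelta(days=guarantee_days)
    return False
```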
[0078] The prioritization and assignment manager 130 can also process legacy scripts, which are scripts that are not modularized. Modularization is the process of grouping tests into small modules that each describe a part of the functionality. The modules combine to form scripts or test cases. The prioritization and assignment manager 130 can process legacy scripts through a legacy script user interface. The legacy script user interface can maintain the following factors for legacy scripts: failure probability, failure impact, lead time, business priority and the professional expertise required by the script. The legacy script user interface can allow a user to search the QC lab and display legacy scripts to the user, and can allow the user to modify the maintained factors.
[0079] The test execution toolbar 140 is a toolbar visible on the tester's computer screen, and provides an indication of each main tool available to the tester and each main test that the tester can or should invoke. It is conveniently displayed to the tester to increase efficiency. The test execution toolbar 140 can provide online test execution. The test execution toolbar 140 allows a tester to load a test, run the test and record the status via the toolbar. Test scripts can be opened directly inside the toolbar, which saves space on a tester's desktop and avoids certain keystrokes, such as ALT-Tab, between screens. Defect discovery and screen capture can be part of the process. The test execution toolbar 140 can also provide a list of built-in approvals. All approvals for modules/scripts can be shown in the toolbar, and an approver can quickly open the relevant script/module for approval. The test execution toolbar 140 also allows quick access to all NGT tools. A quick launch bar can be provided to enable the tester to quickly access all NGT tools. The toolbar can also handle system login management for NGT. A user profile section is available for changing user information. The test execution toolbar 140 also supports docking and auto-hide: it can be docked on the left side of the screen, and it can be selected to be always visible or to hide automatically. An extensible structure allows additional panels to be added to the toolbar. The test execution toolbar 140 can be integrated with the prioritization and assignment manager 130 to allow a tester to request the next test that is to be run.
[0080] The automation controller 150 is an application that can run on a virtual machine, such as a set of servers, or on a computing machine in a "cloud" environment. The automation controller 150 can communicate with the prioritization and assignment manager 130 to request the next test script in the test queue, and open the test script using the basic test tool described earlier, such as HP Quick Test Pro.
[0081] The automation controller 150 can execute the test script using the basic test tool, and record the results back in the basic test tool. The next test script is then requested and the process is repeated. The automation controller 150 additionally provides modular design and partial automation. Automation scripts can be developed as modules, and each automation module can have one or more manual modules mapped to it. Partial automation enables rapid execution of the automated parts of scripts. Essentially, the automation controller 150 is used where applicable to automate the execution of test scripts.
[0082] An additional feature of the automation controller 150 seeks to maximize the "return on investment" or "ROI" associated with each test script that is run automatically. The automation controller 150 selects for automation the test scripts that collectively provide the highest ROI. The choice of whether to automate a particular test script using the automation controller 150 can be based on the ROI associated with the test script. For example, a particular test script can be a test script that handles user login to the system. Because a test script that handles user login can be used by hundreds of different test scripts without variation, this test script provides a high ROI and therefore can be a good candidate for automation. ROI is essentially a measure of the increased efficiency achieved by automating the test script.
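The ROI-driven selection above can be sketched as follows. The patent does not give an ROI formula, so the "manual effort saved per reuse versus one-time automation cost" ratio here is an assumption, as are all names:

```python
def automation_roi(times_reused, manual_minutes, automation_cost_minutes):
    # Hypothetical ROI: total manual effort saved across reuses relative
    # to the one-time cost of automating the script.
    return (times_reused * manual_minutes) / automation_cost_minutes

def pick_automation_candidates(scripts, threshold):
    # Choose the scripts whose estimated ROI clears the threshold,
    # highest ROI first.
    scored = sorted(
        scripts,
        key=lambda s: automation_roi(s["reuses"], s["manual_min"], s["auto_cost"]),
        reverse=True)
    return [s["name"] for s in scored
            if automation_roi(s["reuses"], s["manual_min"], s["auto_cost"]) >= threshold]
```

On this measure, a login script reused by hundreds of other scripts easily outscores a rarely reused one.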
[0083] The test data supply chain 160 creates a mapping between the test script and the type or amount of data that is required by the test script in order to execute properly. When the test script is created using modular script designer 120, the creator specifies the type of data that is required for the test script and specifies the type of output data generated as a result of executing the test script, essentially quantifying the test script's input and output parameters. As different test scripts are added to the test script queue to be handled by the prioritization and assignment manager 130 and then executed, the test data supply chain 160 organizes the test scripts in an efficient way to optimize management of the input data required by the corresponding test script.
[0084] The test data supply chain 160 can provide a data catalog. Data types are modeled and stored in a database. The test data team can check data into and out of the catalog. Also, rules can be specified to enable basic data exploration. The test data supply chain 160 also provides mapping of data to test scripts. During preparation, the type of data required is selected along with the script. Also, using the modular script designer 304, data parameters can be mapped directly to script parameters to allow automated runtime assignment. The test data supply chain 160 additionally provides monitoring of 'stock levels' and reordering. The test data supply chain 160 can monitor demand versus capacity for all types of data, and as data is 'used' by test scripts, the levels are updated. The test data supply chain 160 can order additional data from the data team or through automated provisioning. The test data supply chain 160 can also be integrated with PAM 306. Inventory levels can be used during prioritization to avoid running scripts that do not have test data available or for which inventory levels are low.
[0085] For example, if fifty specific test scripts require type "A" input data and twenty-seven specific test scripts require type "B" input data, the test data supply chain 160 can organize the types of data required for each script and can deliver the data to the test scripts in a "just in time" manner to avoid redundancy and reduce complexity. In addition, such test data can change throughout the test process life cycle based on the results of a particular test. In this way, the test data supply chain 160 tracks the required changes and updates the required data sets for the corresponding test scripts so that, as the test scripts are running, updated test data is available for each test script.
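The stock-level monitoring and reordering described above can be sketched as a small catalog; the class and its reorder mechanism are illustrative, not from the patent:

```python
class DataCatalog:
    # Minimal sketch: each data type has an inventory count that is
    # decremented as scripts consume data, and a reorder threshold that
    # triggers a (re)provisioning request to the test data team.
    def __init__(self, levels, reorder_at):
        self.levels = dict(levels)
        self.reorder_at = reorder_at
        self.reorders = []

    def consume(self, data_type, amount=1):
        if self.levels.get(data_type, 0) < amount:
            raise ValueError(f"no '{data_type}' test data available")
        self.levels[data_type] -= amount
        if self.levels[data_type] <= self.reorder_at:
            self.reorders.append(data_type)  # order more data
```

A prioritizer could consult `levels` before queueing a script, skipping scripts whose data type is exhausted.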
[0086] Reporting portal 170 handles reporting functions for the next generation test system 100. Reporting portal 170 can be based on the Microsoft Business Intelligence system, which is a commercially available software package. Reporting portal 170 also includes an offline data warehouse ("DW") to prevent test tool degradation. An offline DW can be maintained to avoid queries directly against the external test tool. A dimension-based data model is used for simplified reporting. In addition, data is pre-aggregated into a multidimensional online analytical processing ("MOLAP") database to provide rapid analysis. The reporting portal 170 additionally provides cube-based metrics and KPIs. Using SQL Server Analysis Services, measures and targets can be predefined and included in reports. PowerPivot, a spreadsheet add-in available from Microsoft Corporation, allows data to be quickly analyzed in spreadsheet programs, such as Microsoft Excel™, for specific reports. In addition, the reporting portal 170 provides integration with solutions such as Microsoft SharePoint™. Where data from systems other than HP Quality Center™ is required (for example, financial/production data), the solution can receive data from solutions such as Microsoft SharePoint™. The SSIS component allows the solution to be easily extended to direct data sources where required. The reporting portal 170 provides an interface to the various modules of the next generation test system 100 and handles all report generation, report format handling and other reporting functions.
[0087] The defect management tool 180 allows each test individual to quickly identify and track defects in the testing process. Upon discovery of a new defect, several fields of the defect record are pre-populated based on the current test being performed. The defect management tool 180 can simplify the process of discovering, tracking and updating defects. The defect management tool 180 can provide a saved defect list. A toolbar-based defect list with real-time Red, Amber or Green (RAG) status indicators can be provided. Red status indicates that a defect is not being actively resolved; amber status indicates that the defect is in the process of being resolved; and green indicates that the defect is resolved. The defect management tool 180 can allow quick access to the full defect information to see the latest status. The defect management tool 180 can also provide defect discovery in line with test history. While running a test via the toolbar, screenshots and test steps can be captured. When a defect is discovered, this information is pre-populated in the defect record. Screenshots and other attachments can be added directly. The defect management tool 180 also reduces "alt-tab" operations. By including core defect management on the toolbar, the defect management tool 180 is able to reduce the need to "alt-tab" to an external test system, such as HP Quality Center™. The defect management tool 180 also enables automated script unblocking to further avoid time spent in the external test system. The defect management tool 180 additionally provides team-based views. Managers have a 'team view' to enable them to see the defects currently impacting their team, with the relevant severity and status.
[0088] Figure 6 is a high-level block diagram showing the machine environment in which the next generation test system 100 can run, and the interconnection between the various hardware and software components. Each test individual can have a dedicated PC or other computer, referred to as the unified workspace 630. The unified workspace 630 can include several next generation test system 100 modules, such as the test planning tool 110, the modular script designer 120, the execution toolbar 140 and the defect management tool 180, running as a ".Net" client.
[0089] The prioritization and assignment manager 130, the test data supply chain 160 and its associated controller can reside on a server or central server 632, along with a workflow system configured to scale and handle execution of various tasks. However, multiple servers can also be used. The workflow system can be provided by the Microsoft Windows Workflow Foundation, which can also run on one or more of the servers.
[0090] An integration layer 634 enables communication and functionality between the unified workspace 630, a database 636, the prioritization and assignment manager 130 and the test data supply chain 160. The database 636 stores all test scripts and other required data. The integration layer 634 can be a "dll" file residing on the servers 632 and the client machine, such as the unified workspace 630, and functions as a common API interface. The integration layer 634 is decoupled from the basic downstream testing tools 638, such as an HP Quality Center™ tool 644 or an IBM Rational Quality Manager tool 646, because of a pluggable architecture.
[0091] The prioritization and assignment manager 130 and the test data supply chain 160 and its associated controller run according to the workflow system, which resides on the server 632. The automation controller 150 preferably resides on a separate and independent server or set of servers 650. The server that runs the automation controller 150 can be similar to the computer that runs the unified workspace 630, because the automation controller 150 essentially emulates the unified workspace when running test scripts.
[0092] The automation controller 150 receives the prioritized test scripts from the prioritization and assignment manager 130, and accesses multiple virtual machines 640 to run its tests. The virtual machines 640 can be "cloud based" machines. Each virtual machine 640 includes a functional automation tool, such as Hewlett Packard's HP Quick Test Pro, referred to as QTP, which receives the test script from the prioritization and assignment manager 130 and then executes the actual test script. Test results are reported back through the integration layer 634.
[0093] The next generation test system 100 can be incorporated as a system cooperating with computer hardware components and/or as computer-implemented methods. The next generation test system 100 can include a plurality of software modules or subsystems. The modules or subsystems can be implemented in hardware, software, firmware or any combination of hardware, software and firmware, and may or may not reside in a single physical or logical space. For example, the modules or subsystems referred to in this document, which may or may not be shown in the drawings, can be located remotely from each other and can be coupled via a communication network.
[0094] Figure 7 is a high-level hardware block diagram of an embodiment of a computer or machine 700, such as the servers 632 and 650, the PC running the unified workspace 630, and the virtual machines 640. The next generation test system 100 can be incorporated as a system cooperating with computer hardware components and/or as computer-implemented methods. The next generation test system 100 can include a plurality of software modules or subsystems. The modules or subsystems can be implemented in hardware, software, firmware or any combination of hardware, software and firmware, and may or may not reside in a single physical or logical space. For example, the modules or subsystems referred to in this document, which may or may not be shown in the drawings, can be located remotely from each other and can be coupled via a communication network.
[0095] The computer or machine 700 can be a personal computer or a server and can include various hardware components, such as RAM 714, ROM 716, hard disk storage 718, cache memory 720, database storage 722 and so on (also referred to as the "memory subsystem 726"). The computer 700 can include any suitable processing device 728, such as a computer, microprocessor, RISC (reduced instruction set computer) processor, CISC (complex instruction set computer) processor, mainframe computer, workstation, single-chip computer, distributed processor, server, controller, microcontroller, discrete logic computer and so on, as is known in the art. For example, the processing device 728 can be an Intel Pentium® microprocessor, x86 compatible microprocessor, or equivalent device, and it can be incorporated into a server, a personal computer, or any suitable computing platform.
[0096] The memory subsystem 726 can include any suitable storage components, such as RAM, EPROM (electrically programmable ROM), flash memory, dynamic memory, static memory, FIFO (first-in, first-out) memory, LIFO (last-in, first-out) memory, circular memory, semiconductor memory, bubble memory, buffer memory, disk memory, optical memory, cache memory and so on. Any suitable form of memory can be used, whether fixed storage on magnetic media, storage on a semiconductor device, or remote storage accessible via a communication link. A user or system interface 730 can be coupled to the computer 700 and can include various input devices 736, such as switches selectable by the system manager and/or a keyboard. The user interface may also include suitable output devices 740, such as an LCD display, a CRT, various LED indicators, a printer and/or a speech output device, as is known in the art.
[0097] To promote communication between the computer 700 and external sources, a communication interface 742 can be operationally coupled to the computer system. The communication interface 742 can be, for example, a local area network, such as an Ethernet network, intranet, Internet or other suitable network 744. The communication interface 742 can also be connected to a public switched telephone network (PSTN) 746 or POTS (plain old telephone service), which can promote communication via the Internet 744. Any suitable commercially available communication device or network can be used.
[0098] Figure 8 shows a conceptual diagram of an embodiment of the NGT system 100. As shown in figure 8, the NGT system 100 can include a presentation layer 810, a business component layer 820, the integration layer 634 and a data layer 840. The presentation layer 810 includes the user interface (UI) components 812 that render and format data for display to users 802, including project managers, testers and test leads, and that obtain and validate data that the users 802 enter. The presentation layer 810 also includes the UI process components 814 that drive the process using separate user process components, to avoid hard-coding the process flow and state management logic in the UI elements themselves. The business component layer 820 implements business logic and workflow. The business component layer 820 includes the business components 822 that implement the business logic of the application. The business component layer 820 also includes the business entities 824 and the business workflow 826. Business entities are data transfer objects in the business component layer 820. These are common objects that can be used across layers, including the presentation layer 810, to pass data in all directions.
[0099] The integration layer 634 provides server-side agnostic access to the upstream layers (the business component layer 820 and the presentation layer 810), and enables connection capability through a common interface to one or more server-side systems such as QC, Rational and Team Foundation Server. The integration layer 634 implements the following design pattern: an abstract base class inherits from ProviderBase (which is a class available with Microsoft's .Net framework); each concrete implementer in turn inherits from the above abstract class; and the appropriate Provider (which can be an NGT component that communicates with a server-side system, such as QC) is loaded based on the type definition in a .config file. The integration layer 634 also includes the integration facade 832. The integration facade 832 exposes a simplified interface to the business component layer 820, reads data transfer objects from a combination of one or more server-side repositories or caches (for example, Windows Server R2), and consolidates them into a common super data transfer object to return to the business component layer 820. The integration layer 634 also includes the NGT components 834 that interface between the integration facade 832 and the data layer 840 and can provide mapping functionality for the integration layer 634, if required. The integration layer 634 also includes caching components 836 and test tool components 838. The test tool components 838 are providers that serve requests to read/write data from a test tool 804.
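The pluggable provider pattern described above can be sketched as follows. The patent's version relies on .NET's ProviderBase and a .config type definition; this Python analogue uses an abstract base class and a configuration map, with hypothetical class names:

```python
from abc import ABC, abstractmethod

class ProviderBase(ABC):
    # Abstract base: each concrete test-tool provider implements read/write.
    @abstractmethod
    def read(self, query): ...

    @abstractmethod
    def write(self, record): ...

class QualityCenterProvider(ProviderBase):
    # Concrete implementer standing in for a QC-backed provider.
    def read(self, query):
        return f"QC result for {query}"

    def write(self, record):
        return True

# Stand-in for the .config type definition that names the provider to load.
PROVIDERS = {"QC": QualityCenterProvider}

def load_provider(type_name):
    # Load the appropriate provider based on the configured type name.
    return PROVIDERS[type_name]()
```

The upstream layers depend only on the `ProviderBase` interface, so a Rational or Team Foundation Server provider can be swapped in by configuration alone.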
[00100] The data layer 840 includes the data access components 842 that centralize the logic necessary to access the underlying NGT data storage, exposing methods to allow easier and more transparent access to the database. It also includes the data helpers/utilities 844 that are used to centralize generic data access functionality, such as managing database connections. The data layer 840 also includes the service agents 836 that provide Windows Communication Foundation service proxies for calls to application server services. The data layer 840 can be an Enterprise Library Data Access Application Block or a custom designed data layer. Alternatively, relational object mapping tools, such as EntitySpaces (available from EntitySpaces, LLP), Genome (available from TechTalk, GmbH), LINQ-to-SQL (available from Microsoft Corporation), Entity Framework (also available from Microsoft Corporation), or LLBLGen Pro (available from Solutions Design), can be used to generate the data layer 840 components.
[00101] The cross-cutting functions 805 in the NGT 100 may include, for example, security, exception handling, locking and communication. The NGT 100 can also include a local cache 806. Outputs from the NGT 100 can include, for example, the e-mail functionality 807 or other information communication functionality. E-mails may include notifications to testers regarding script rejection or approval, notifications to approvers regarding scripts that are ready for review, and notifications regarding security issues, system exceptions and auditing. The NGT 100 can also communicate information to the test tool 330 and to an NGT database 636.
[00102] Figure 9 shows a logic diagram of an embodiment of the NGT system 100. In this embodiment, the display layer 1410 can include a plurality of UI components 1412 and UI processes 1414, including an administration interface 911, an execution toolbar 912, a script module designer 913, a unified workspace 102, a defect tracking interface 914, KPI views 915 and an approval review interface 916. The business component layer 1420 can include a plurality of components, including a user profile component 921, a search services component 922, a workflow services component 923, a business rules component 924, a time maintenance component 925, an authorization component 926 and an authentication component 927. The integration layer 634 can include an integration facade 1432, which can include aggregation 931, integration APIs 932 and decomposition 933. The integration layer 634 can also include providers 934, caching 935 and data transformation 935. The data layer 1440 can provide access to a data provider 941, data helpers/utilities 942 and the Data Services API 943.
[00103] Figure 10 is a high-level hardware block diagram of another embodiment of the NGT system. The NGT system 100 and its key components 110, 120, 130, 140, 150, 160, 170 and 180 can be incorporated as a system cooperating with computer hardware components, such as a processing device 728, and/or as computer-implemented methods. The NGT system 100 can include a plurality of software components or subsystems. The components or subsystems, such as the test planning tool 110, the modular script designer 120, the prioritization and assignment manager 130, the test execution toolbar 140, the automation controller 150, the test data supply chain 160, the reporting portal 170 and/or the defect management tool 180, can be implemented in hardware, software, firmware or any combination of hardware, software and firmware, and may or may not reside in a single physical or logical space. For example, the modules or subsystems referred to in this document, which may or may not be shown in the drawings, can be located remotely from each other and can be coupled via a communication network.
[00104] The logic, circuitry and processing described above can be encoded or stored on machine-readable or computer-readable media, such as a compact disc read-only memory (CDROM), magnetic or optical disk, flash memory, random access memory (RAM) or read-only memory (ROM), erasable programmable read-only memory (EPROM), or other machine-readable media as, for example, instructions for execution by a processor, controller or other processing device.
[00105] The media can be implemented as any device that contains, stores, communicates, propagates or transports executable instructions for use by or in connection with an instruction-executing system, apparatus or device. Alternatively or additionally, the logic can be implemented as analog or digital logic using hardware, such as one or more integrated circuits, or one or more processors executing instructions; or in software in an application programming interface (API) or in a Dynamic Link Library (DLL), functions available in shared memory or defined as local or remote procedure calls; or as a combination of hardware and software.
[00106] In other implementations, the logic can be represented in a signal or in a propagated-signal medium. For example, instructions that implement the logic of any given program can take the form of an electronic, magnetic, optical, electromagnetic, infrared or other type of signal. The systems described above can receive such a signal at a communication interface, such as a fiber optic interface, antenna, or other analog or digital signal interface, retrieve the instructions from the signal, store them in a machine-readable memory and/or run them with a processor.
[00107] The systems may include additional or different logic and may be implemented in many different ways. A processor may be implemented as a controller, microprocessor, microcontroller, application-specific integrated circuit (ASIC), discrete logic, or a combination of other types of circuits or logic. Similarly, memories may be DRAM, SRAM, flash or other types of memory. Parameters (for example, conditions and thresholds) and other data structures may be stored and managed separately, may be incorporated into a single memory or database, or may be logically and physically organized in many different ways. Programs and instructions may be parts of a single program, separate programs, or distributed across several memories and processors.
[00108] Although various embodiments of the invention have been described, it will be apparent to persons of ordinary skill in the art that many other embodiments and implementations are possible within the scope of the invention. Accordingly, the invention is not to be restricted except in light of the appended claims and their equivalents.
Claims (14)
[0001]
1. Method of prioritizing and assigning test scripts on an integrated test platform, the test platform configured to organize, manage and promote debugging of test scripts prepared by a plurality of test individuals, the method comprising: receiving a plurality of test scripts; applying a predetermined set of prioritization factors to each test script, the prioritization factors including at least two of: failure impact, probability of failure, lead time, business priority, estimated effort and test end date; assigning a weighting value to each prioritization factor based on the relative importance of the factor; establishing a priority value for each test script based on the weighted prioritization factors corresponding to the test script; and assigning the test script to a position in a plurality of queues for subsequent execution based on the corresponding priority value, the assigned test script being associated with one or more predisposition factors, the queues including a pre-queue, a priority queue and an assignment queue; characterized by the fact that it further comprises: selecting a subset of test scripts from the priority queue, where each test script has a skill match rating equal to the matched non-mandatory professional knowledge count divided by the mandatory professional knowledge count, multiplied by the professional knowledge matching weight, and the test script has an exclusion block script factor set to "N"; and identifying a selected test script from the subset of test scripts in the assignment queue and routing the selected test script to a test individual if the predisposition factors indicate that the requirements of the test script match the corresponding predisposition factors of the test individual.
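The weighted prioritization recited in claim 1 can be illustrated with a short sketch. This is not the patented implementation: the factor weights, the 0-10 scoring scale and the example scripts below are assumptions invented for the illustration.

```python
# Illustrative sketch of claim 1's weighted prioritization. The weights,
# the 0-10 scores and the script names are assumed example values only.

PRIORITIZATION_WEIGHTS = {
    "failure_impact": 0.30,
    "probability_of_failure": 0.25,
    "lead_time": 0.15,
    "business_priority": 0.20,
    "estimated_effort": 0.10,
}

def priority_value(factors):
    """Weighted sum of a test script's prioritization factors."""
    return sum(PRIORITIZATION_WEIGHTS[name] * score
               for name, score in factors.items())

scripts = {
    "login_smoke":   {"failure_impact": 9, "probability_of_failure": 7,
                      "lead_time": 3, "business_priority": 8, "estimated_effort": 2},
    "report_export": {"failure_impact": 4, "probability_of_failure": 5,
                      "lead_time": 6, "business_priority": 3, "estimated_effort": 7},
}

# A higher priority value earns an earlier position in the priority queue.
priority_queue = sorted(scripts, key=lambda name: priority_value(scripts[name]),
                        reverse=True)
```

Under these assumed weights, `login_smoke` scores higher than `report_export` and is queued first.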
[0002]
2. Method according to claim 1, characterized by the fact that the predetermined set of factors comprises designation factors and prioritization factors, wherein the designation factors are based on attributes of the test individual and the prioritization factors are based on attributes of the test script.
[0003]
3. Method according to claim 1, characterized by the fact that the priority value is established for each test script in a batch mode process.
[0004]
4. Method according to claim 1, characterized by the fact that the predisposition factors include an indication of a set of professional knowledge required by the test script, and an indication of a set of skills possessed by the test individual.
[0005]
5. Method according to claim 1, characterized by the fact that the predisposition factors include a predisposition history that indicates whether the test individual has previously executed a version of the selected test script.
[0006]
6. Method according to claim 1, characterized by the fact that the priority value is established in real time.
[0007]
7. Method according to claim 1, characterized by the fact that the test individual receives a test script in real time when requested by the test individual.
[0008]
8. Method according to claim 1, characterized by the fact that it further comprises: obtaining skill configuration information for at least two test individuals; and wherein identifying the selected test script includes using the skill configuration information for the at least two test individuals.
[0009]
9. Method of prioritizing and assigning test scripts on an integrated test platform, the test platform configured to organize, manage and promote debugging of test scripts prepared by a plurality of test individuals, the method comprising: receiving a plurality of test scripts; applying a predetermined set of factors to each test script, the factors including prioritization factors based on script attributes and designation factors based on attributes of a test individual, the prioritization factors including at least two of: failure impact, probability of failure, lead time, business priority, estimated effort and test end date; assigning a weighting value to each factor based on the relative importance of each factor; establishing a priority rating value for each test script, the priority rating value being based on the weighted priority factors corresponding to each test script; establishing a designation rating value for each test script, the designation rating value being based on the priority rating value and the corresponding weighted factors; characterized by the fact that it further comprises: determining a subset of test scripts based on attributes of the test individual and attributes of each test script, where each test script of the subset of test scripts has a skill match rating equal to the matched non-mandatory professional knowledge count divided by the mandatory professional knowledge count, multiplied by the professional knowledge matching weight, and the test scripts have an exclusion block script factor set to "N"; saving each test script in a plurality of queues, the queues including a pre-queue, a priority queue and an assignment queue; and assigning, based on the corresponding designation rating value, each test script in the subset of test scripts to a position in the assignment queue for subsequent assignment to the test individual.
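The skill match rating recited in claims 1 and 9 (matched non-mandatory professional knowledge count, divided by the mandatory professional knowledge count, multiplied by the professional knowledge matching weight) can be sketched as below. The matching weight value, the skill names and the rule that a tester missing mandatory knowledge is ineligible are assumptions made for this illustration.

```python
# Sketch of the skill match rating from claims 1 and 9. MATCH_WEIGHT,
# the skill names and the ineligibility rule are illustrative assumptions.

MATCH_WEIGHT = 1.5  # assumed professional knowledge matching weight

def skill_match_rating(script, tester_skills):
    """(matched non-mandatory count / mandatory count) * matching weight."""
    mandatory = set(script["mandatory_skills"])
    if not mandatory <= tester_skills:
        return None  # assumed: missing mandatory knowledge makes the tester ineligible
    matched_non_mandatory = len(set(script["non_mandatory_skills"]) & tester_skills)
    return matched_non_mandatory / len(mandatory) * MATCH_WEIGHT

script = {
    "mandatory_skills": ["sql", "http"],
    "non_mandatory_skills": ["soap", "performance", "oracle"],
    "exclusion_block": "N",  # only "N" scripts are eligible for selection
}
tester_skills = {"sql", "http", "soap", "oracle"}

rating = (skill_match_rating(script, tester_skills)
          if script["exclusion_block"] == "N" else None)
```

Here the tester matches two of the three non-mandatory skills against two mandatory skills, giving a rating of 2/2 x 1.5 = 1.5.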
[0010]
10. Method according to claim 9, characterized by the fact that the priority rating value is established for each test script in a batch process mode.
[0011]
11. Method according to claim 9, characterized by the fact that the script attributes include required testing skills and an expected completion date, and the attributes of the test individual include testing abilities of the test individual or a time-off schedule of the test individual.
[0012]
12. Method according to claim 9, characterized in that the priority rating value is established for each test script in a batch process mode.
[0013]
13. Method according to claim 9, characterized by the fact that the script attributes include required testing skills and an expected completion date, and the attributes of the test individual include testing abilities of the test individual or a time-off schedule of the test individual.
[0014]
14. System for prioritizing and assigning test scripts on an integrated test platform, the test platform configured to organize, manage and promote debugging of test scripts prepared by a plurality of test individuals, the system characterized by the fact that it comprises: a computer processor; and a memory in communication with the computer processor, the memory comprising logic for a prioritization and designation manager component, wherein the logic, when executed by the computer processor, causes the processor to: receive a plurality of test scripts; apply a predetermined set of factors to each test script, the factors including prioritization factors based on script attributes and designation factors based on attributes of a test individual, the prioritization factors including at least two of: failure impact, probability of failure, lead time, business priority, estimated effort and test end date; assign a weighting value to each factor based on the relative importance of the factor; establish a priority rating value for each test script, the priority rating value being based on the weighted priority factors corresponding to each test script; establish a designation rating value for each test script, the designation rating value being based on the priority rating value and the corresponding weighted factors; save each test script in a plurality of queues, the queues including a pre-queue, a priority queue and an assignment queue; select a subset of test scripts from the priority queue, where each test script has a skill match rating equal to the matched non-mandatory professional knowledge count divided by the mandatory professional knowledge count, multiplied by the professional knowledge matching weight, and the test scripts have an exclusion block script factor set to "N"; and assign, based on the corresponding designation rating value, the subset of test scripts to assignment queue positions for subsequent designation to the test individual, where the subset of test scripts is determined based on the attributes of the test individual and the script attributes.
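The pre-queue / priority queue / assignment queue flow recited in claim 14 can be sketched as below. The tuple layout, the rating values and the example scripts are assumptions made for the illustration.

```python
# Assumed sketch of claim 14's three-queue flow: scripts enter a pre-queue,
# are ordered into the priority queue by priority rating value, and only
# scripts whose exclusion block factor is "N" reach the assignment queue.
from collections import deque

# (script name, priority rating value, exclusion block factor)
pre_queue = deque([("smoke", 6.7, "N"), ("export", 4.6, "N"), ("legacy", 8.1, "Y")])

# Order by priority rating value, highest first.
priority_queue = deque(sorted(pre_queue, key=lambda s: s[1], reverse=True))

# Eligible scripts move on for subsequent designation to a test individual.
assignment_queue = deque(s[0] for s in priority_queue if s[2] == "N")
```

In this example the blocked `legacy` script never reaches the assignment queue even though it has the highest rating, leaving `smoke` and then `export` for designation.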
Similar technologies:
Publication number | Publication date | Patent title
BR102012008540B1|2020-11-17|method and system for prioritizing and assigning test scripts to an integrated testing platform.
AU2012202261B2|2015-01-22|Test data supply chain manager for an integrated testing platform
US7747736B2|2010-06-29|Rule and policy promotion within a policy hierarchy
US9183124B2|2015-11-10|Automation controller for next generation testing system
US9448915B2|2016-09-20|Modular script designer for next generation testing system
US20160019484A1|2016-01-21|System and method for managing resources of a project
Singh et al.2011|Bug tracking and reliability assessment system |
US20130254737A1|2013-09-26|Project delivery system
US10311393B2|2019-06-04|Business process model analyzer and runtime selector
US20160012366A1|2016-01-14|System and method for optimizing project operations, resources and associated processes of an organization
CA2775162C|2016-12-13|Test data supply chain manager for an integrated testing platform
Chanda et al.2013|Application lifecycle management
CA2775165C|2016-02-09|Automation controller for next generation testing system
BR102012009133B1|2022-02-08|COMPUTER IMPLEMENTED METHOD AND SYSTEM TO PROVIDE TEST DATA FOR SCRIPTS
Nouman2016|Software Build History Dossier for Development and Testing
NEGI2021|TASK TRACKER
EM2012|Effective testing: A case study approach for improving test efficiency
Packard2009|RISC-Risk Identification Status and Control
Shaik0|A New software Test Model “Graph Model”
Subramanian2011|Purchasing Contracts Management System
US20080263453A1|2008-10-23|Method and apparatus for process configuration
Family patents:
Publication number | Publication date
CA2773930C|2017-03-07|
BR102012008540A2|2015-08-18|
AU2012202053A1|2012-11-01|
US20120266023A1|2012-10-18|
CN102789414B|2016-05-11|
CN102789414A|2012-11-21|
CA2773930A1|2012-10-12|
US9286193B2|2016-03-15|
AU2012202053B2|2013-10-17|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

US5943640A|1995-10-25|1999-08-24|Maxtor Corporation|Testing apparatus for digital storage device|
US20030055705A1|2001-06-19|2003-03-20|International Business Machines Corporation|Method and apparatus for skills-based task routing|
US7165240B2|2002-06-20|2007-01-16|International Business Machines Corporation|Topological best match naming convention apparatus and method for use in testing graphical user interfaces|
US20040103396A1|2002-11-20|2004-05-27|Certagon Ltd.|System for verification of enterprise software systems|
GB2397905B|2003-01-29|2005-04-13|Motorola Inc|Method and apparatus for categorising test scripts|
US20060041864A1|2004-08-19|2006-02-23|International Business Machines Corporation|Error estimation and tracking tool for testing of code|
US20060112388A1|2004-11-22|2006-05-25|Masaaki Taniguchi|Method for dynamic scheduling in a distributed environment|
US7231210B1|2004-12-29|2007-06-12|At&T Corp.|Method and apparatus for automatically generating call flow test scripts|
US7581138B2|2005-10-24|2009-08-25|International Business Machines Corporation|Method, system and computer program for managing test processes based on customized UML diagrams|
CN1955945A|2005-10-25|2007-05-02|国际商业机器公司|Method and device for automatic generating test executive routine sequence of software test process|
US8266592B2|2008-04-21|2012-09-11|Microsoft Corporation|Ranking and optimizing automated test scripts|
US8893086B2|2009-09-11|2014-11-18|International Business Machines Corporation|System and method for resource modeling and simulation in test planning|
KR101132560B1|2010-06-09|2012-04-03|강원대학교산학협력단|System and method for automatic interface testing based on simulation for robot software components|
WO2011146750A2|2010-05-19|2011-11-24|Google Inc.|Bug clearing house|
US8793535B2|2011-07-21|2014-07-29|Microsoft Corporation|Optimizing system usage when running quality tests in a virtual machine environment|
EP2845095A4|2012-04-30|2015-12-23|Hewlett Packard Development Co|Prioritization of continuous deployment pipeline tests|
US9329915B1|2012-05-08|2016-05-03|Amazon Technologies, Inc.|System and method for testing in a production environment|
US8984341B1|2012-05-08|2015-03-17|Amazon Technologies, Inc.|Scalable testing in a production system with autoscaling|
US8977903B1|2012-05-08|2015-03-10|Amazon Technologies, Inc.|Scalable testing in a production system with autoshutdown|
US9285973B1|2012-08-08|2016-03-15|John S Gable|Systems and methods for detecting and displaying bias|
WO2014117320A1|2013-01-29|2014-08-07|Hewlett-Packard Development Company, L.P.|Generating test code to test executable code|
GB2512861A|2013-04-09|2014-10-15|Ibm|Method and system for performing automated system tests|
US8997052B2|2013-06-19|2015-03-31|Successfactors, Inc.|Risk-based test plan construction|
US9396039B1|2013-09-20|2016-07-19|Amazon Technologies, Inc.|Scalable load testing using a queue|
CN103701672A|2014-01-07|2014-04-02|国家电网公司|230-MHz acquisition terminal telecommunication unit matching compatibility testing method based on SGWM |
US9201768B1|2014-02-06|2015-12-01|Amdocs Software Systems Limited|System, method, and computer program for recommending a number of test cases and effort to allocate to one or more business processes associated with a software testing project|
US9507695B2|2014-04-14|2016-11-29|International Business Machines Corporation|Risk-based test coverage and prioritization|
US20160274962A1|2015-03-19|2016-09-22|Alcatel-Lucent Usa Inc.|Self-Tuning Troubleshooting Scripts|
US9740590B2|2015-03-27|2017-08-22|International Business Machines Corporation|Determining importance of an artifact in a software development environment|
US9424171B1|2015-04-23|2016-08-23|International Business Machines Corporation|Resource-constrained test automation|
US10452508B2|2015-06-15|2019-10-22|International Business Machines Corporation|Managing a set of tests based on other test failures|
US10176426B2|2015-07-07|2019-01-08|International Business Machines Corporation|Predictive model scoring to optimize test case order in real time|
CN105912465B|2016-04-05|2018-05-18|上海航天计算机技术研究所|The test method of spaceborne rail control Guidance & Navigation software|
US10672013B2|2016-07-14|2020-06-02|Accenture Global Solutions Limited|Product test orchestration|
CN106354631A|2016-08-23|2017-01-25|北京中电华大电子设计有限责任公司|Rapid software system regression testing method and device|
CN106980573B|2016-10-26|2020-11-20|创新先进技术有限公司|Method, device and system for constructing test case request object|
AU2018200643A1|2017-03-09|2018-09-27|Accenture Global Solutions Limited|Smart advisory for distributed and composite testing teams based on production data and analytics|
CN107728600A|2017-09-06|2018-02-23|中国航空工业集团公司西安飞行自动控制研究所|A kind of BIT test command systems of selection based on priority|
US10073763B1|2017-12-27|2018-09-11|Accenture Global Solutions Limited|Touchless testing platform|
US10642721B2|2018-01-10|2020-05-05|Accenture Global Solutions Limited|Generation of automated testing scripts by converting manual test cases|
TWI676906B|2018-04-13|2019-11-11|和碩聯合科技股份有限公司|Prompt method and computer system thereof|
US11132288B2|2018-04-26|2021-09-28|EMC IP Holding Company LLC|Data-driven scheduling of automated software program test suites|
WO2020159539A1|2019-02-01|2020-08-06|Dell Products L.P.|Smart selection of test scripts for commodity testing on manufacturing floor|
Legal status:
2015-08-18| B03A| Publication of an application: publication of a patent application or of a certificate of addition of invention|
2018-12-18| B06F| Objections, documents and/or translations needed after an examination request according art. 34 industrial property law|
2019-10-15| B06U| Preliminary requirement: requests with searches performed by other patent offices: suspension of the patent application procedure|
2020-03-03| B06A| Notification to applicant to reply to the report for non-patentability or inadequacy of the application according art. 36 industrial patent law|
2020-06-23| B09A| Decision: intention to grant|
2020-11-17| B16A| Patent or certificate of addition of invention granted|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 11/04/2012, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application number | Filing date | Patent title
US201161474516P|2011-04-12|2011-04-12|
US61/474,516|2011-04-12|